
    Binary domain generalization for sparsifying binary neural networks

    Binary neural networks (BNNs) are an attractive solution for developing and deploying deep neural network (DNN)-based applications on resource-constrained devices. Despite their success, BNNs still suffer from a fixed and limited compression factor, which may be explained by the fact that existing pruning methods for full-precision DNNs cannot be directly applied to BNNs. In fact, weight pruning of BNNs leads to performance degradation, which suggests that the standard binarization domain of BNNs is not well adapted for the task. This work proposes a novel, more general binary domain that extends the standard binary one and is more robust to pruning techniques, thus guaranteeing improved compression and avoiding severe performance losses. We demonstrate a closed-form solution for quantizing the weights of a full-precision network into the proposed binary domain. Finally, we show the flexibility of our method, which can be combined with other pruning strategies. Experiments on CIFAR-10 and CIFAR-100 demonstrate that the novel approach is able to generate efficient sparse networks with reduced memory usage and run-time latency, while maintaining performance. Comment: Accepted as conference paper at ECML PKDD 202
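
    The closed-form quantization mentioned above is not spelled out in the abstract. For context, the standard closed-form binarization used in BNNs projects a full-precision weight tensor onto {-alpha, +alpha} by minimizing ||w - alpha*b||^2, which gives b = sign(w) and alpha = mean(|w|). The sketch below illustrates that baseline only, not the paper's generalized binary domain.

```python
import numpy as np

def binarize_closed_form(w: np.ndarray):
    """Closed-form projection of full-precision weights onto {-alpha, +alpha}.

    Minimizing ||w - alpha * b||^2 over alpha > 0 and b in {-1, +1}^n yields
    b = sign(w) and alpha = mean(|w|). This is the standard BNN baseline;
    the paper's generalized binary domain extends this formulation.
    """
    b = np.sign(w)
    b[b == 0] = 1.0                 # break ties deterministically
    alpha = np.mean(np.abs(w))      # optimal scale for the L2 objective
    return alpha, b

# Example: quantize a random weight vector and report the reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(size=256)
alpha, b = binarize_closed_form(w)
print("alpha:", alpha, "MSE:", np.mean((w - alpha * b) ** 2))
```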

    Elastic Registration of Geodesic Vascular Graphs

    Vascular graphs can embed a number of high-level features, from morphological parameters to functional biomarkers, and represent an invaluable tool for longitudinal and cross-sectional clinical inference. This, however, is only feasible when graphs are co-registered together, allowing coherent multiple comparisons. The robust registration of vascular topologies therefore stands as a key enabling technology for group-wise analyses. In this work, we present an end-to-end vascular graph registration approach that aligns networks with non-linear geometries and topological deformations by introducing a novel overconnected geodesic vascular graph formulation, without enforcing any anatomical prior constraint. The 3D elastic graph registration is then performed with state-of-the-art graph matching methods used in computer vision. Promising results of vascular matching are found using graphs from synthetic and real angiographies. Observations and future designs are discussed towards potential clinical applications.
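
    The graph-matching step is only named in the abstract, not specified. As a simplified stand-in, the sketch below assigns nodes between two graphs by minimizing pairwise feature distances with the Hungarian algorithm (scipy's linear_sum_assignment); the node features (e.g. 3D position plus vessel radius) are hypothetical, and real graph-matching solvers also enforce edge and geodesic consistency.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_vascular_nodes(feat_a: np.ndarray, feat_b: np.ndarray):
    """Toy node assignment between two vascular graphs.

    feat_a, feat_b: (n, d) arrays of per-node features (hypothetical, e.g.
    3D position plus vessel radius). This sketch only minimizes the sum of
    node feature distances, ignoring edge/geodesic consistency.
    """
    cost = cdist(feat_a, feat_b)                 # pairwise Euclidean costs
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
    return list(zip(rows.tolist(), cols.tolist())), cost[rows, cols].sum()

# Example with random node features for two small, nearly identical graphs.
rng = np.random.default_rng(1)
a = rng.normal(size=(5, 4))
b = a[rng.permutation(5)] + 0.05 * rng.normal(size=(5, 4))  # perturbed copy
pairs, total = match_vascular_nodes(a, b)
print(pairs, round(total, 3))
```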

    In vitro and in vivo comparison of the anti-staphylococcal efficacy of generic products and the innovator of oxacillin

    Background: Oxacillin continues to be an important agent in the treatment of staphylococcal infections; many generic products are available, and the only requirement for their approval is demonstration of pharmaceutical equivalence. We tested the assumption that pharmaceutical equivalence predicts therapeutic equivalence by comparing 11 generics with the innovator product in terms of concentration of the active pharmaceutical ingredient (API), minimal inhibitory (MIC) and bactericidal (MBC) concentrations, and antibacterial efficacy in the neutropenic mouse thigh infection model.
    Methods: The API in each product was measured by a validated microbiological assay and compared by slope (potency) and intercept (concentration) analysis of linear regressions. MIC and MBC were determined by broth microdilution according to Clinical and Laboratory Standards Institute (CLSI) guidelines. For in vivo efficacy, neutropenic ICR mice were inoculated with a clinical strain of Staphylococcus aureus. The animals had 4.14 ± 0.18 log10 CFU/thigh when treatment started. Groups of 10 mice per product received a total dose ranging from 2.93 to 750 mg/kg per day administered q1h. Sigmoidal dose-response curves were generated by nonlinear regression fitted to the Hill equation to compute the maximum effect (Emax), slope (N), and the effective dose reaching 50% of the Emax (ED50). Based on these results, the bacteriostatic dose (BD) and the dose needed to kill the first log of bacteria (1LKD) were also determined.
    Results: Four generic products failed pharmaceutical equivalence due to significant differences in potency; however, all products were indistinguishable from the innovator in terms of MIC and MBC. Independently of their status with respect to pharmaceutical equivalence or in vitro activity, all generics failed therapeutic equivalence in vivo, displaying a significantly lower Emax and requiring greater BD and 1LKD, or fitting a non-sigmoidal model.
    Conclusions: Pharmaceutical or in vitro equivalence did not entail therapeutic equivalence for oxacillin generic products, indicating that the criteria for approval deserve review to include evaluation of in vivo efficacy.
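
    The Hill-equation analysis described in the Methods can be reproduced with standard nonlinear least squares. The sketch below is a minimal illustration assuming the usual form E(D) = E0 - Emax * D^N / (ED50^N + D^N) for the change in log10 CFU/thigh versus total daily dose, with BD and 1LKD read off the fitted curve; the dose-response data here are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, e0, emax, ed50, n):
    """Sigmoidal (Hill) model: change in log10 CFU/thigh vs. total daily dose."""
    return e0 - emax * dose**n / (ed50**n + dose**n)

def dose_for_effect(target, e0, emax, ed50, n):
    """Invert the fitted Hill curve: dose producing a given net change in log10 CFU."""
    k = e0 - target
    return ed50 * (k / (emax - k)) ** (1.0 / n)

# Synthetic example data (hypothetical, for illustration only): doses in mg/kg/day
# and observed change in log10 CFU/thigh after 24 h of therapy.
doses = np.array([2.93, 11.7, 46.9, 187.5, 750.0])
effect = np.array([2.1, 1.2, -0.4, -1.6, -2.0])

params, _ = curve_fit(hill, doses, effect, p0=[2.0, 4.0, 50.0, 1.0],
                      bounds=([-10, 0, 1e-3, 0.1], [10, 10, 1000, 10]))
e0, emax, ed50, n = params
bd = dose_for_effect(0.0, e0, emax, ed50, n)     # bacteriostatic dose (no net growth)
lkd1 = dose_for_effect(-1.0, e0, emax, ed50, n)  # dose killing the first log (1LKD)
print(f"Emax={emax:.2f}, N={n:.2f}, ED50={ed50:.1f}, BD={bd:.1f}, 1LKD={lkd1:.1f}")
```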

    Do Deep Neural Networks Contribute to Multivariate Time Series Anomaly Detection?

    Anomaly detection in time series is a complex task that has been widely studied. In recent years, the capabilities of unsupervised anomaly detection algorithms have received much attention. This trend has led researchers to compare only learning-based methods in their articles, abandoning some more conventional approaches. As a result, the community in this field has been encouraged to propose increasingly complex learning-based models, mainly based on deep neural networks. To our knowledge, there are no comparative studies between conventional, machine learning-based, and deep neural network methods for the detection of anomalies in multivariate time series. In this work, we study the anomaly detection performance of sixteen conventional, machine learning-based, and deep neural network approaches on five real-world open datasets. By analyzing and comparing the performance of each of the sixteen methods, we show that no family of methods outperforms the others. Therefore, we encourage the community to reincorporate the three categories of methods in multivariate time series anomaly detection benchmarks.
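
    As an example of the "conventional" family the abstract refers to, the sketch below is a minimal statistical baseline that scores each time step of a multivariate series by its Mahalanobis distance to a reference mean and covariance; the split fraction and the assumption that the reference portion is mostly anomaly-free are illustrative choices, not taken from the paper.

```python
import numpy as np

def mahalanobis_anomaly_scores(x: np.ndarray, train_frac: float = 0.5) -> np.ndarray:
    """Conventional multivariate anomaly scores via Mahalanobis distance.

    x: (T, d) multivariate time series. The mean and covariance are estimated
    on the first `train_frac` portion (assumed mostly anomaly-free), and each
    time step is scored by its Mahalanobis distance to that reference.
    """
    t_train = int(len(x) * train_frac)
    mu = x[:t_train].mean(axis=0)
    cov = np.cov(x[:t_train].T) + 1e-6 * np.eye(x.shape[1])  # regularize
    cov_inv = np.linalg.inv(cov)
    diff = x - mu
    return np.sqrt(np.einsum("td,de,te->t", diff, cov_inv, diff))

# Example: inject a spike into channel 0 and check it receives the top score.
rng = np.random.default_rng(2)
series = rng.normal(size=(500, 3))
series[400, 0] += 8.0
scores = mahalanobis_anomaly_scores(series)
print("most anomalous step:", int(scores.argmax()))
```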

    Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning

    Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes. To address these problems, we propose a novel deep learning-based framework for interactive segmentation by incorporating CNNs into a bounding-box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network- and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2D segmentation of multiple organs from fetal MR slices, where only two types of these organs were annotated for training; and 3D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only tumor cores in one MR sequence were annotated for training. Experimental results show that 1) our model is more robust in segmenting previously unseen objects than state-of-the-art CNNs; 2) image-specific fine-tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods. Comment: 11 pages, 11 figures
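
    The abstract names a weighted loss that accounts for network- and interaction-based uncertainty but does not give its form. The sketch below is one plausible reading, assuming a per-pixel weighted cross-entropy in which scribbled pixels get a fixed high weight and unscribbled pixels are weighted by the network's predictive entropy; the weighting scheme, names, and values are illustrative, not the paper's definition.

```python
import numpy as np

def weighted_ce_loss(probs, labels, scribble_mask, w_scribble=5.0):
    """Per-pixel weighted cross-entropy for image-specific fine-tuning (illustrative).

    probs:         (H, W) predicted foreground probabilities.
    labels:        (H, W) current label map (0/1), e.g. the initial CNN output
                   corrected by user scribbles.
    scribble_mask: (H, W) boolean mask of user-scribbled pixels.
    Scribbled pixels receive a fixed high weight; elsewhere the weight grows
    with the network's predictive entropy, so confident pixels contribute less.
    The paper's exact formulation may differ.
    """
    eps = 1e-7
    p = np.clip(probs, eps, 1 - eps)
    ce = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # network uncertainty
    weights = np.where(scribble_mask, w_scribble, entropy / np.log(2.0))
    return float((weights * ce).sum() / (weights.sum() + eps))

# Example on a tiny 4x4 image with one scribbled pixel marked as foreground.
rng = np.random.default_rng(3)
probs = rng.uniform(0.1, 0.9, size=(4, 4))
labels = (probs > 0.5).astype(float)
scribbles = np.zeros((4, 4), dtype=bool)
scribbles[0, 0] = True
labels[0, 0] = 1.0  # user marks this pixel as foreground
print("loss:", weighted_ce_loss(probs, labels, scribbles))
```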